Layer-wise training of deep networks using kernel similarity
Authors
Abstract
Deep learning has shown promising results in many machine learning applications. The hierarchical feature representation built by deep networks enables compact and precise encoding of the data. A kernel analysis of trained deep networks demonstrated that, with deeper layers, simpler and more accurate data representations are obtained. In this paper, we propose an approach for layer-wise training of a deep network for the supervised classification task. A transformation matrix for each layer is obtained by solving an optimization problem aimed at a better representation, where each subsequent layer builds its representation on top of the features produced by the previous layer. We compared the performance of our approach with that of a DNN trained using back-propagation with the same architecture as ours. Experimental results on real image datasets demonstrate the efficacy of our approach. We also performed a kernel analysis of the layer representations to validate the claim of better feature encoding.
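A minimal sketch of the layer-wise scheme described above, assuming a kernel-target-alignment objective as the per-layer optimization (the abstract does not spell out the exact objective, so the alignment criterion, the tanh non-linearity, and plain gradient ascent are all illustrative assumptions, not the paper's method):

```python
import numpy as np

def train_layer(X, y, width, lr=0.05, steps=300, seed=0):
    """One greedy layer: find W maximizing the alignment between the
    linear kernel of H = tanh(XW) and the ideal label kernel K*."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], width))
    K_star = (y[:, None] == y[None, :]).astype(float)  # K*[i,j] = 1 iff y_i == y_j
    for _ in range(steps):
        H = np.tanh(X @ W)
        K = H @ H.T
        nK, nS = np.linalg.norm(K), np.linalg.norm(K_star)
        # Gradient of A(K) = <K, K*> / (||K|| ||K*||) w.r.t. K, pushed
        # back through K = HH^T and H = tanh(XW).
        G_K = K_star / (nK * nS) - (K * K_star).sum() * K / (nK**3 * nS)
        G_H = 2.0 * G_K @ H
        W += lr * X.T @ (G_H * (1.0 - H**2))  # gradient ascent step
    return W

def layerwise_train(X, y, widths):
    """Stack layers greedily; each one trains on the previous layer's output."""
    Ws, H = [], X
    for w in widths:
        Ws.append(train_layer(H, y, w))
        H = np.tanh(H @ Ws[-1])
    return Ws, H
```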
Similar resources
Kernel Analysis of Deep Networks
When training deep networks, it is common knowledge that an efficient and well-generalizing representation of the problem is formed. In this paper we aim to elucidate what makes the emerging representation successful. We analyze the layer-wise evolution of the representation in a deep network by building a sequence of deeper and deeper kernels that subsume the mapping performed by more and more ...
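One concrete way to run such a kernel analysis, assuming per-layer activations have already been collected from a trained network (the alignment score used here is the standard kernel-target alignment; the cited paper's exact kernel construction may differ):

```python
import numpy as np

def kernel_alignment(K, y):
    # Alignment between a layer's Gram matrix and the ideal label kernel.
    K_star = (y[:, None] == y[None, :]).astype(float)
    return (K * K_star).sum() / (np.linalg.norm(K) * np.linalg.norm(K_star))

def layerwise_kernel_scores(activations, y):
    # activations: per-layer representations H_1, ..., H_L (n x d_l); each
    # linear kernel H_l H_l^T subsumes the mapping of layers 1..l, so the
    # scores show how the representation evolves with depth.
    return [kernel_alignment(H @ H.T, y) for H in activations]
```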
Recognition Using Deep Networks
We develop a technique using deep networks for human facial expression recognition. Images are preprocessed with photometric normalization manipulation to remove illumination variance. Features are then extracted by convolving each preprocessed image with Gabor filters. Kernel PCA is applied to the features before feeding them into the deep neural network, which consists of hidden layers and a softmax classifier. The deep network is trained using greedy layer-wise ...
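A sketch of the pipeline this snippet describes, using skimage's Gabor filter and scikit-learn's KernelPCA and MLPClassifier as stand-ins (the filter-bank parameters, the mean/std pooling, and the network sizes are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_thetas=4):
    # Convolve a photometrically normalized grayscale image with a small
    # Gabor filter bank; pool each response magnitude to a feature vector.
    feats = []
    for f in frequencies:
        for k in range(n_thetas):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_thetas)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

# X_img: normalized face images, y: expression labels (assumed given):
# X = np.stack([gabor_features(im) for im in X_img])
# Z = KernelPCA(n_components=64, kernel="rbf").fit_transform(X)
# clf = MLPClassifier(hidden_layer_sizes=(128, 64)).fit(Z, y)
```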
Faster learning of deep stacked autoencoders on multi-core systems using synchronized layer-wise pre-training
Deep neural networks are capable of modelling highly non-linear functions by capturing different levels of abstraction of data hierarchically. When training deep networks, the system is first initialized near a good optimum by greedy layer-wise unsupervised pre-training. However, with burgeoning data and increasing dimensions of the architecture, the time complexity of this approach becomes enormous ...
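For reference, the classic sequential scheme this paper speeds up looks roughly like the following (a tied-weight sigmoid autoencoder per layer is one common choice; the learning rate, epoch count, and widths are placeholders):

```python
import numpy as np

def pretrain_autoencoder(X, hidden, lr=0.01, epochs=50, seed=0):
    """Train one tied-weight autoencoder layer: X -> sigmoid(XW) -> HW^T."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(X @ W)))  # encoder
        E = H @ W.T - X                     # reconstruction error
        # Backprop through the linear decoder and encoder (tied weights).
        G_H = E @ W
        G_W = X.T @ (G_H * H * (1.0 - H)) + E.T @ H
        W -= lr * G_W / len(X)
    return W

def greedy_pretrain(X, widths):
    # Strictly sequential: each layer trains on the codes of the previous
    # one; the cited paper overlaps these stages across cores instead.
    weights, H = [], X
    for w in widths:
        W = pretrain_autoencoder(H, w)
        weights.append(W)
        H = 1.0 / (1.0 + np.exp(-(H @ W)))
    return weights
```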
Optimizing Kernel Machines using Deep Learning
Building highly non-linear and non-parametric models is central to several state-of-the-art machine learning systems. Kernel methods form an important class of techniques that induce a reproducing kernel Hilbert space (RKHS) for inferring non-linear models through the construction of similarity functions from data. These methods are particularly preferred in cases where the training data sizes ...
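A small example of what inferring non-linear models through similarity functions means in practice: kernel ridge regression, where the model is fit entirely through an RBF Gram matrix (the gamma and ridge strength lam below are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Similarity function inducing an RKHS: k(a, b) = exp(-gamma ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=0.5):
    # Dual solution: alpha = (K + lam I)^-1 y; f(x) = sum_i alpha_i k(x, x_i).
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, X_new, alpha, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```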
A Pipelined Pre-training Algorithm for DBNs
Deep networks have been widely used in many domains in recent years. However, the pre-training of deep networks is time-consuming with the greedy layer-wise algorithm, and the scalability of this algorithm is greatly restricted by its inherently sequential nature, where only one hidden layer can be trained at a time. In order to speed up the training of deep networks, this paper mainly focuses on ...
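The pipelining idea can be sketched with a per-layer worker that consumes mini-batches and immediately forwards its codes downstream, so several layers train concurrently instead of strictly one at a time (the train_step functions below are hypothetical per-layer updates, e.g. one contrastive-divergence step of an RBM; this illustrates only the scheduling, not the paper's algorithm):

```python
import queue
import threading

def layer_worker(in_q, out_q, train_step):
    """Train one layer on mini-batches as they arrive, forwarding its codes
    immediately so the next layer need not wait for the whole dataset."""
    while True:
        batch = in_q.get()
        if batch is None:                # sentinel: propagate and stop
            if out_q is not None:
                out_q.put(None)
            return
        codes = train_step(batch)        # one update; returns hidden codes
        if out_q is not None:
            out_q.put(codes)

# Wiring three layers into a pipeline (step1..step3 are the hypothetical
# per-layer update functions mentioned above):
# qs = [queue.Queue() for _ in range(4)]
# for i, step in enumerate([step1, step2, step3]):
#     threading.Thread(target=layer_worker,
#                      args=(qs[i], qs[i + 1], step), daemon=True).start()
# for batch in batches:
#     qs[0].put(batch)
# qs[0].put(None)
```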
Journal: CoRR
Volume: abs/1703.07115
Issue: -
Pages: -
Publication date: 2017